In this article, we propose an alternating directional 3-D quasi-recurrent neural network for hyperspectral image (HSI) denoising, which can effectively embed the domain knowledge of structural spatiospectral correlation and global correlation along the spectrum (GCS). Specifically, 3-D convolution is utilized to extract structural spatiospectral correlation in an HSI, while a quasi-recurrent pooling function is employed to capture the GCS. Moreover, the alternating directional structure is introduced to eliminate the causal dependence with no additional computational cost. The proposed model is capable of modeling spatiospectral dependence while remaining flexible toward HSIs with an arbitrary number of bands. Extensive experiments on HSI denoising demonstrate significant improvement over the state of the art under various noise settings, in terms of both restoration accuracy and computation time. Our code is available at https://github.com/Vandermode/QRNN3D.

Deep neural networks (DNNs) have thrived in recent years, and batch normalization (BN) plays an indispensable role in their training. However, BN is costly because its reduction and elementwise operations are hard to execute in parallel, which heavily reduces the training speed. To address this issue, in this article, we propose a methodology to alleviate BN's cost by using only a few sampled or generated data for mean and variance estimation at each iteration. The key challenge is to achieve a satisfactory balance between normalization effectiveness and execution efficiency: effectiveness calls for less data correlation in sampling, whereas efficiency calls for more regular execution patterns. To this end, we design two categories of approaches that sample or create a few uncorrelated data for statistics estimation under certain strategy constraints.
The former category includes ``batch sampling (BS)'', which randomly selects a few samples from each batch, and ``feature sampling (FS)'', which randomly selects a small patch from each feature map of all samples; the latter is ``virtual data set normalization (VDN)'', which generates a few synthetic random samples to directly create uncorrelated data for statistics estimation. Accordingly, multiway strategies are designed to reduce the data correlation for accurate estimation while optimizing the execution pattern for faster running. The proposed methods are comprehensively evaluated on various DNN models, where the loss in model accuracy and convergence rate is negligible. Without the support of any specialized libraries, a 1.98x BN-layer acceleration and a 23.2% overall training speedup can be achieved in practice on modern GPUs. Furthermore, our methods perform well on the well-known ``micro-BN'' problem arising with tiny batch sizes. This article provides a promising solution for the efficient training of high-performance DNNs.

This article investigates the robust exponential stability of fuzzy switched memristive inertial neural networks (FSMINNs) with time-varying delays under a mode-dependent destabilizing impulsive control protocol. The memristive model presented here is treated as a switched system rather than handled via the theory of differential inclusions and set-valued maps. To optimize the robust exponential stabilization process and reduce the time cost, hybrid mode-dependent destabilizing impulsive and adaptive feedback controllers are applied simultaneously to stabilize the FSMINNs. In the new model, multiple impulsive effects may occur between two switching modes, and multiple switching effects may likewise occur between two impulsive instants. Based on switched analysis techniques, the Takagi-Sugeno (T-S) fuzzy method, and the average dwell time, extended robust exponential stability conditions are derived.
Finally, a simulation is provided to illustrate the effectiveness of the results.

Concept drift refers to changes in the distribution of the underlying data and is an inherent property of evolving data streams. Ensemble learning with dynamic classifiers has proven to be an efficient method of handling concept drift. However, how best to create and maintain ensemble diversity on evolving streams remains a challenging problem. In contrast to estimating diversity via inputs, outputs, or classifier parameters, we propose a diversity measurement based on whether the ensemble members agree on the probability of a regional distribution change. In our method, estimates of regional distribution changes are used as instance weights. Constructing different region sets through different schemes leads to different drift-estimation results, thereby creating diversity, and the classifiers that disagree the most are selected to maximize diversity. Accordingly, an instance-based ensemble learning algorithm, called the diverse instance-weighting ensemble (DiwE), is developed to address concept drift in data stream classification problems. Evaluations on various synthetic and real-world data stream benchmarks show the effectiveness and advantages of the proposed algorithm.

Conventional subspace clustering methods obtain an explicit data representation that captures the global structure of the data and clusters via the associated subspace. However, owing to their intrinsic linearity and fixed structure, the advantages of the prior structure are limited. To address this problem, in this brief, we embed structured graph learning with adaptive neighbors into deep autoencoder networks, such that an adaptive deep clustering approach, namely, autoencoder constrained clustering with adaptive neighbors (ACC_AN), is developed.
The proposed method can not only adaptively investigate the nonlinear structure of the data via a parameter-free graph built on deep features but also iteratively strengthen the correlations among the deep representations during learning. In addition, the local structure of the raw data is preserved by minimizing the reconstruction error. Compared with state-of-the-art works, ACC_AN is the first deep clustering method to embed adaptive structured graph learning so as to update the latent representation of the data and the structured deep graph simultaneously.
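To make the ``parameter-free graph built on deep features'' concrete, the following is a minimal NumPy sketch of the adaptive-neighbors graph construction commonly used in this line of work (the closed-form solution that connects each sample to exactly k neighbors without a bandwidth parameter). The abstract does not give implementation details, so the function name, the choice of k, and the exact formulation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def adaptive_neighbors_graph(X, k=5):
    """Sketch of an adaptive-neighbors similarity graph (assumed formulation).

    For each sample i, the neighbor weights s_i solve
        min_{s_i >= 0, s_i^T 1 = 1}  sum_j ||x_i - x_j||^2 s_ij + gamma_i * s_ij^2,
    with gamma_i chosen so that exactly k weights are nonzero; this choice
    makes the graph parameter-free apart from the neighbor count k.
    """
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d[i])            # nearest first; idx[0] == i (distance 0)
        di = d[i, idx[1:k + 2]]           # distances to the k+1 nearest true neighbors
        # closed form: s_ij = (d_{i,k+1} - d_ij) / (k * d_{i,k+1} - sum_{h<=k} d_ih)
        denom = k * di[k] - di[:k].sum()
        w = (di[k] - di[:k]) / (denom + 1e-12)
        S[i, idx[1:k + 1]] = np.maximum(w, 0)  # each row sums to 1 and is k-sparse
    return S
```

In the deep setting the abstract describes, X would be the autoencoder's latent features rather than the raw data, and the graph would be rebuilt as those features are updated during training.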